Lecture – MIT 6.825 Techniques in AI – Agent programs – problem-solving problems

Greg Detre

09:30 Tuesday, September 10, 2002

 

agent programs – how to deal with the lack of computational power

make a giant lookup table?

 

Assumptions

assume (to begin with) that:

1.      the agent knows the world dynamics

learning allows us to avoid this assumption

2.      world state is finite and not too big (to enumerate)

logic allows us to avoid enumerating states

3.      world is deterministic

uncertainty allows us to avoid this

4.      agent knows the current state

disjunctive logic + uncertainty representations allow us to avoid this

Example – planning a route through a map

see Russell + Norvig textbook

Formal definition

set of states – e.g. cities

initial state – e.g. A

operators – mappings from states to states, S → S, i.e. the roads

goal test – tells us whether we're done, i.e. a mapping from S → {t, f}

path cost – (S,O)* → R, often articulable as a sum Σ c(Si,Oi)

 

Search algorithms – uninformed

list of nodes in a search tree

each node is a partial path towards the goal

you can't map from the tree back to the graph, i.e. the nodes on the tree don't represent cities, but actually city-by-way-of-another-city (e.g. ZA represents being at Z via A)

i.e. there will be many more nodes than cities

Agenda

          put the start state in the agenda

          loop:

get a path out of the agenda

if goal, then return it

expand (put children in agenda)
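The agenda loop above can be sketched in Python – the toy graph, state names, and `pop`/`push` helpers here are hypothetical, just to show that the choice of agenda data structure is the only thing that varies between search strategies:

```python
# Generic agenda-based search: the agenda holds partial paths, and the
# pop/push functions decide the strategy (stack = depth-first, queue =
# breadth-first, priority queue = uniform cost / A*).
def agenda_search(start, goal_test, successors, pop, push):
    agenda = []
    push(agenda, [start])                    # put the start state in the agenda
    while agenda:
        path = pop(agenda)                   # get a path out of the agenda
        state = path[-1]
        if goal_test(state):                 # if goal, then return it
            return path
        for child in successors(state):
            push(agenda, path + [child])     # expand: put children in agenda
    return None

# hypothetical toy map (states are letters, edges are roads, no cycles)
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}

# treating the agenda as a stack gives depth-first behaviour
path = agenda_search('A', lambda s: s == 'D', lambda s: graph[s],
                     pop=lambda a: a.pop(), push=lambda a, p: a.append(p))
```

Note this bare loop does no loop-checking, so it only terminates cleanly on acyclic graphs; the strategies below add that.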

 

Depth-first search

go as far as you can till things aren't working, then backtrack the minimum amount and try again

i.e. treat the agenda as a stack

put children at top

take new path off top

 

need to avoid revisiting old nodes (i.e. getting into a loop) – 2 strategies:

1.      don't add a node (path) to the agenda if it's already on the agenda

2.      don't add a state to the agenda if we've already expanded it

this method avoids a lot of extra work
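A depth-first sketch using strategy 2 (skip states we've already expanded); the toy map with its A–B loop is made up for illustration:

```python
def depth_first_search(start, goal_test, successors):
    agenda = [[start]]          # stack of partial paths
    expanded = set()            # strategy 2: never re-expand a state
    while agenda:
        path = agenda.pop()     # take new path off the top
        state = path[-1]
        if state in expanded:
            continue
        expanded.add(state)
        if goal_test(state):
            return path
        for child in successors(state):
            if child not in expanded:
                agenda.append(path + [child])   # put children at the top
    return None

# hypothetical toy map with a cycle (A <-> B) that the expanded set breaks
graph = {'A': ['B', 'C'], 'B': ['A', 'D'], 'C': ['D'], 'D': []}
path = depth_first_search('A', lambda s: s == 'D', lambda s: graph[s])
```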

 

not optimal in any sense – but very easy to implement

How much time and space?

b = branching factor (how many times it branches from a node – maximum)

m = maximum depth of tree

m corresponds to the longest non-looping path (which may well be longer than the longest path from a to b if there's a wild-goose-chase path)

d = goal depth (the depth in the search tree at which a goal occurs – usually the min goal)

 

branches die when you've been all through them and end up somewhere you've been before

 

time = O(b^m)

space = O(mb)

 

Breadth-first search

here, the agenda is a queue

take a new node from the front, and add children at the end

this will always find the shortest path in terms of hops

doesn't take path cost into account

How much time + space?

it will have to go down d levels, and there are about b^d nodes in d levels

time = O(b^d)

space = O(b^d)

so (when d < m) it's faster and finds a better (fewest-hops) solution, but it has bigger space requirements than depth-first
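A minimal breadth-first sketch, treating the agenda as a queue; the toy graph (where the hop-shortest route is the direct A → D road) is hypothetical:

```python
from collections import deque

def breadth_first_search(start, goal_test, successors):
    agenda = deque([[start]])     # queue of partial paths
    seen = {start}
    while agenda:
        path = agenda.popleft()   # take a new path from the front
        state = path[-1]
        if goal_test(state):
            return path           # shortest path in terms of hops
        for child in successors(state):
            if child not in seen:
                seen.add(child)
                agenda.append(path + [child])   # add children at the end
    return None

# hypothetical map: A -> D directly (1 hop) beats A -> B -> C -> D (3 hops)
graph = {'A': ['B', 'D'], 'B': ['C'], 'C': ['D'], 'D': []}
path = breadth_first_search('A', lambda s: s == 'D', lambda s: graph[s])
```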

 

Iterative deepening solution

best of both worlds

successive depth-first searches with an increasing depth bound

depth bound        space        time

1                  b            b

2                  2b           b^2

3                  3b           b^3

...                ...          ...

max                O(bd)        b^(d+1)
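Iterative deepening can be sketched as successive depth-limited depth-first searches; the toy graph and the `max_depth` safety cap are assumptions for the example:

```python
def depth_limited(path, goal_test, successors, limit):
    # Depth-first search that refuses to go deeper than `limit` more steps.
    state = path[-1]
    if goal_test(state):
        return path
    if limit == 0:
        return None
    for child in successors(state):
        if child not in path:    # avoid loops along the current path
            result = depth_limited(path + [child], goal_test,
                                   successors, limit - 1)
            if result is not None:
                return result
    return None

def iterative_deepening(start, goal_test, successors, max_depth=20):
    # Successive depth-first searches with an increasing depth bound:
    # cheap redundant re-searching of shallow levels buys BFS-like
    # shortest-hop answers with DFS-like O(db) space.
    for bound in range(max_depth + 1):
        result = depth_limited([start], goal_test, successors, bound)
        if result is not None:
            return result
    return None

# hypothetical toy map
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': [], 'D': []}
path = iterative_deepening('A', lambda s: s == 'D', lambda s: graph[s])
```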

 

Uniform cost search

agenda is a priority queue

priorities = length of path so far

a priority queue – the item with the lowest (or highest) priority gets taken out first each time

guaranteed to generate the least-cost path

this is why it's important to have different representations for different routes to the same city

you can also save space by getting rid of longer paths to the same place (though if you don't, they'll just be ignored at the cost of space)
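A uniform-cost sketch with the agenda as a priority queue (Python's `heapq`); the weighted toy map, where the hop-shortest route is not the cheapest, is made up:

```python
import heapq

def uniform_cost_search(start, goal_test, successors):
    # Priority = cost of path so far; cheapest partial path comes out first.
    agenda = [(0, [start])]
    expanded = set()
    while agenda:
        cost, path = heapq.heappop(agenda)
        state = path[-1]
        if state in expanded:
            continue             # a cheaper path to here was already expanded
        expanded.add(state)
        if goal_test(state):
            return cost, path    # guaranteed least-cost path
        for child, step in successors(state):
            if child not in expanded:
                heapq.heappush(agenda, (cost + step, path + [child]))
    return None

# hypothetical weighted map: the 1-hop route A -> D costs 10,
# but the 2-hop route A -> B -> D costs only 3
graph = {'A': [('B', 1), ('D', 10)], 'B': [('D', 2)], 'D': []}
cost, path = uniform_cost_search('A', lambda s: s == 'D',
                                 lambda s: graph[s])
```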

 

R&N assert that:

time = O(b^d)

LPK assert:

time = O(b^m)

 

however, the length of the early steps may not be an indication of the length further down

heuristic: h(state) is an estimate of the minimum cost to a goal

for instance, you could use the Euclidean distance to your goal (e.g. for a geographical map)

use this heuristic function to bias the search

A* search

agenda is still a priority queue

priorities = g(p) + h(p)

where:

g(p) = length of path so far

h(p) = heuristic for how far from goal

this is guaranteed to generate the least-cost path

however, h has to be an underestimate of the cost to the goal

otherwise, a route that appears to be taking you very near to the goal might be blocked by a ravine

in a way, a uniform cost search is a kind of A* search with a really lame heuristic

if you have a good h (i.e. it's an underestimate, but pretty tight), then it really helps

if h is right, then you go straight to your goal

goal nodes have h = 0

if h is too high (i.e. an overestimate), then it could take you to a non-optimal goal: the overestimated h on the optimal path raises its overall priority above other, non-optimal paths, so you never get around to exploring it
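An A* sketch using straight-line (Euclidean) distance as the admissible heuristic; the coordinates and roads here are hypothetical:

```python
import heapq
import math

def a_star(start, goal_test, successors, h):
    # Priority = g(p) + h(p): cost of path so far plus a heuristic
    # underestimate of the remaining cost to the goal.
    agenda = [(h(start), 0, [start])]
    expanded = set()
    while agenda:
        _, g, path = heapq.heappop(agenda)
        state = path[-1]
        if state in expanded:
            continue
        expanded.add(state)
        if goal_test(state):
            return g, path
        for child, step in successors(state):
            if child not in expanded:
                heapq.heappush(agenda,
                               (g + step + h(child), g + step, path + [child]))
    return None

# hypothetical map with (x, y) coordinates; roads are at least as long as
# the straight-line distance, so straight-line distance is an underestimate
coords = {'A': (0, 0), 'B': (1, 0), 'C': (0, 1), 'D': (2, 0)}
graph = {'A': [('B', 1), ('C', 1)], 'B': [('D', 1)], 'C': [('D', 3)], 'D': []}
h = lambda s: math.dist(coords[s], coords['D'])
g, path = a_star('A', lambda s: s == 'D', lambda s: graph[s], h)
```

With h = 0 everywhere this reduces to uniform cost search, matching the remark above that uniform cost is A* with a really lame heuristic.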

 

Admin

6.825-students@mit.edu

web page pdf handouts

stellar.mit.edu

need privileges

videos, ppt with voice-over

transcripts

 

Questions

"(S,O)* → R" – what does the star mean???

do I understand the path cost sum expression Σ c(Si,Oi) ???

search algorithm – uninformed vs informed???

"don't add a node to the agenda if we've already expanded it" – why is this method better???

how is iterative deepening different from a breadth-first search???

I think LPK are wrong about their uniform cost time estimate � is that right???

maybe it would be good to add a tiny stochastic aspect that goes down funny little high-cost alleys in the hope that it'll get lucky and find that it leads somewhere good, which you can then explore further???

y, kind of – that requires a heuristic to tell you if your stochastic approach has taken you somewhere helpful or not